Results 1 - 7 of 7
1.
15th International Conference on COMmunication Systems and NETworkS, COMSNETS 2023 ; : 462-465, 2023.
Article in English | Scopus | ID: covidwho-2281703

ABSTRACT

Due to the COVID-19 pandemic, people have been forced to move to online spaces to attend classes, meetings, and similar events. The effectiveness of online classes depends on the engagement level of students. A straightforward way to monitor engagement is to observe students' facial expressions, eye gaze, head gesticulations, hand movements, and body movements through their video feeds. However, video-based engagement detection has limitations: it is influenced by video backgrounds, lighting conditions, camera angles, students' unwillingness to turn on their cameras, etc. In this work, we propose a non-intrusive mechanism for estimating engagement level by monitoring head gesticulations through the channel state information (CSI) of WiFi signals. First, we conduct an anonymous survey to investigate whether head gesticulation patterns are correlated with engagement. We then develop models to recognize head gesticulations through CSI. Finally, we plan to correlate head gesticulation patterns with the instructor's intent to estimate students' engagement. © 2023 IEEE.
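The abstract does not specify the recognition model, but the pipeline it describes, turning a window of CSI amplitude readings into a head-gesture label, can be sketched with simple hand-picked statistical features and a nearest-centroid rule. Everything below (the two features, the centroids, the labels) is an illustrative assumption, not the paper's method.

```python
# Hypothetical sketch: labeling head gestures from a window of WiFi CSI
# amplitudes via simple statistical features and a nearest-centroid rule.
from statistics import mean, pvariance

def csi_features(window):
    """Summarize one CSI amplitude window (list of floats) as
    (variance, mean absolute sample-to-sample change)."""
    diffs = [b - a for a, b in zip(window, window[1:])]
    return (pvariance(window), mean(abs(d) for d in diffs))

def nearest_centroid(feat, centroids):
    """Return the label whose feature centroid is closest (Euclidean)."""
    def dist(c):
        return sum((f - x) ** 2 for f, x in zip(feat, c)) ** 0.5
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy centroids: a nodding head perturbs the channel far more than stillness.
centroids = {"nod": (4.0, 1.5), "still": (0.1, 0.05)}
window = [10.0, 12.5, 9.0, 13.0, 8.5, 12.0]  # synthetic CSI amplitudes
print(nearest_centroid(csi_features(window), centroids))  # → nod
```

A real system would of course learn such a classifier from labeled CSI traces rather than use fixed centroids; the sketch only shows the shape of the inference step.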

2.
Multimed Tools Appl ; : 1-30, 2022 Sep 09.
Article in English | MEDLINE | ID: covidwho-2261661

ABSTRACT

The dramatic impact of the COVID-19 pandemic has resulted in the closure of physical classrooms and the shift of teaching to the online medium. To make the online learning environment as interactive as traditional offline classrooms, it is essential to ensure proper engagement of students during online learning sessions. This paper proposes a deep learning-based approach that uses facial emotions to detect the real-time engagement of online learners. This is done by analysing students' facial expressions to classify their emotions throughout the online learning session. The facial emotion recognition information is used to calculate an engagement index (EI) that predicts two engagement states, "Engaged" and "Disengaged". Different deep learning models, such as Inception-V3, VGG19, and ResNet-50, are evaluated and compared to find the best predictive classification model for real-time engagement detection. Benchmark datasets such as FER-2013, CK+, and RAF-DB are used to gauge the overall performance and accuracy of the proposed system. Experimental results showed that the proposed system achieves accuracies of 89.11%, 90.14%, and 92.32% for Inception-V3, VGG19, and ResNet-50, respectively, on the benchmark datasets and our own dataset. ResNet-50 outperforms all others, with an accuracy of 92.32% for facial emotion classification in real-time learning scenarios.
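The abstract does not give the EI formula, so the sketch below is one plausible reading: per-frame emotion probabilities are weighted by how much each emotion is taken to indicate engagement, and the weighted sum is thresholded into the two states. The weights and the 0.5 threshold are assumptions, not values from the paper.

```python
# Illustrative engagement index (EI) from facial emotion probabilities.
# Weights and threshold are assumed for the sketch, not taken from the paper.
EMOTION_WEIGHTS = {
    "happiness": 1.0, "surprise": 0.8, "neutral": 0.6,
    "sadness": 0.3, "anger": 0.25, "fear": 0.3, "disgust": 0.2,
}

def engagement_index(probs):
    """EI as a weighted sum of per-frame emotion probabilities in [0, 1]."""
    return sum(EMOTION_WEIGHTS[e] * p for e, p in probs.items())

def engagement_state(probs, threshold=0.5):
    """Threshold the EI into the two states used in the paper."""
    return "Engaged" if engagement_index(probs) >= threshold else "Disengaged"

frame = {"happiness": 0.7, "neutral": 0.2, "sadness": 0.1}
print(engagement_state(frame))  # EI = 0.85 → Engaged
```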

3.
Multimed Tools Appl ; : 1-27, 2023 Feb 10.
Article in English | MEDLINE | ID: covidwho-2241199

ABSTRACT

Due to the COVID-19 crisis, the education sector has shifted to a virtual environment. Monitoring students' engagement level and providing regular feedback during e-classes is a major concern, as this facility is lacking in the e-learning environment, where the teacher cannot physically observe the students. The present study proposes an engagement detection system to ensure that students get immediate feedback during e-learning. Our proposed engagement system analyses the student's behaviour throughout the e-learning session. The proposed novel approach evaluates three modalities of student behaviour, namely facial expression, eye blink count, and head movement, from live video streams to predict student engagement in e-learning. The proposed system is implemented with deep-learning approaches such as VGG-19 and ResNet-50 for facial emotion recognition and a facial landmark approach for eye-blink and head movement detection. The results from the different modalities (for which the algorithms are proposed) are combined to determine the engagement index (EI); based on the EI value, an engaged or disengaged state is predicted. The present study suggests that the proposed facial-cues-based multimodal system accurately determines student engagement in real time. The experiments achieved an accuracy of 92.58% and showed that the proposed engagement detection approach significantly outperforms existing approaches.
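The abstract states only that the three modality results are combined into the EI, not how. A minimal fusion sketch, assuming a weighted linear combination of normalized scores (with blink rate and off-screen head motion counting against engagement), could look like this; the weights and normalization are assumptions.

```python
# Sketch of fusing three behaviour modalities into one engagement index.
# Weights and score normalization are assumed, not taken from the paper.
def fuse_ei(emotion_score, blink_rate, head_movement,
            weights=(0.5, 0.25, 0.25)):
    """Weighted fusion of modality scores, each normalized to [0, 1].

    emotion_score: engagement evidence from facial expression (high = engaged)
    blink_rate:    normalized blink count (high = drowsy/disengaged)
    head_movement: normalized off-screen head motion (high = distracted)
    """
    w_e, w_b, w_h = weights
    return w_e * emotion_score + w_b * (1 - blink_rate) + w_h * (1 - head_movement)

ei = fuse_ei(emotion_score=0.9, blink_rate=0.2, head_movement=0.1)
state = "Engaged" if ei >= 0.5 else "Disengaged"
print(round(ei, 3), state)  # → 0.875 Engaged
```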

4.
21st International Conference on Image Analysis and Processing, ICIAP 2022 ; 13233 LNCS:411-422, 2022.
Article in English | Scopus | ID: covidwho-1877767

ABSTRACT

Recognition of user interaction, in particular engagement detection, has become crucial for online working and learning environments, especially during the COVID-19 outbreak. Such recognition and detection systems significantly improve the user experience and efficiency by providing valuable feedback. In this paper, we propose a novel Engagement Detection with Multi-Task Training (ED-MTT) system that minimizes mean squared error and triplet loss together to determine the engagement level of students in an e-learning environment. The performance of this system is evaluated and compared against the state of the art on a publicly available dataset as well as on videos collected from real-life scenarios. The results show that ED-MTT achieves 6% lower MSE than the best state-of-the-art performance, with highly acceptable training time and lightweight feature extraction. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
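The multi-task objective named in the abstract, MSE on the engagement regression plus a triplet loss on embeddings, can be written down directly. The sketch below assumes a plain unweighted sum and a margin of 1.0; the paper's actual batching, margin, and any loss weighting are not given here.

```python
# Minimal sketch of the ED-MTT multi-task objective: MSE on engagement
# predictions plus a triplet loss on embeddings, summed. Margin and the
# unweighted sum are assumptions for illustration.
def mse(preds, targets):
    """Mean squared error over paired predictions and targets."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(a, p) - d(a, n) + margin) with Euclidean distances."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

def ed_mtt_loss(preds, targets, anchor, positive, negative):
    """Joint objective: regression error plus embedding separation."""
    return mse(preds, targets) + triplet_loss(anchor, positive, negative)

loss = ed_mtt_loss([0.6, 0.4], [0.5, 0.5],
                   anchor=[0.0, 0.0], positive=[0.1, 0.0], negative=[2.0, 0.0])
print(round(loss, 3))  # → 0.01 (triplet term is zero: negative is far away)
```

Minimizing the triplet term pushes samples with similar engagement levels together in embedding space, which regularizes the regression head; that is the usual motivation for this kind of multi-task pairing.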

5.
Electronics ; 11(9):1500, 2022.
Article in English | ProQuest Central | ID: covidwho-1837603

ABSTRACT

With COVID-19, formal education was interrupted in all countries and the importance of distance learning has increased. Any lesson can be taught with various communication tools, but it is difficult to know how well the lesson reaches the students. This study aims to monitor students in a classroom or in front of a computer with a camera in real time, recognizing their faces and head poses and scoring their distraction, in order to detect student engagement based on head pose and Eye Aspect Ratio. Distraction was determined by associating a student's attention with whether they were looking in the right direction, toward the teacher or the camera. Face recognition and head pose estimation were tested on the UPNA Head Pose Database; in the conducted tests, the most successful face recognition result was obtained with the Local Binary Patterns method, with a 98.95% recognition rate. In classifying student engagement as Engaged or Not Engaged, a support vector machine gave results with 72.4% accuracy. The developed system will be used to recognize and monitor students in the classroom or in front of the computer and to determine the course flow autonomously.
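The Eye Aspect Ratio mentioned in the abstract has a standard six-landmark formulation (due to Soukupová and Čech): the two vertical eyelid distances over twice the horizontal eye width. The sketch below implements that standard formula; the ~0.2 blink threshold in the comment is a common convention, not a value from this paper.

```python
# Standard Eye Aspect Ratio (EAR) from six eye landmarks:
# EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
from math import dist  # Python 3.8+

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """Landmarks ordered: left corner, two upper-lid points, right corner,
    two lower-lid points. Open eyes give EAR well above ~0.2; a blink or
    closed eye drives it toward zero."""
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Open eye: vertical gaps are a sizeable fraction of the horizontal span.
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
print(round(eye_aspect_ratio(*open_eye), 3))  # → 0.667
```

Tracking EAR per frame and counting frames below the threshold is the usual way such systems detect blinks and prolonged eye closure.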

6.
12th International Conference on Learning Analytics and Knowledge: Learning Analytics for Transition, Disruption and Social Change, LAK 2022 ; : 294-303, 2022.
Article in English | Scopus | ID: covidwho-1752916

ABSTRACT

This study presents a novel video recommendation system for an algebra virtual learning environment (VLE) that leverages ideas and methods from engagement measurement, item response theory, and reinforcement learning. Following Vygotsky's Zone of Proximal Development (ZPD) theory, but considering low-affect and high-affect students separately, we developed a system of five categories of video recommendations: 1) Watch new video; 2) Review current topic video with a new tutor; 3) Review segment of current video with current tutor; 4) Review segment of current video with a new tutor; 5) Watch next video in curriculum sequence. The category of recommendation was determined by student scores on a quiz and a sensor-free engagement detection model. New video recommendations (i.e., category 1) were selected by a novel reinforcement learning algorithm that takes input from an item response theory model. The recommendation system was evaluated in a large field experiment, both before and after school closures due to the COVID-19 pandemic. The results show evidence of the effectiveness of the video recommendation algorithm during the period of normal school operations, but the effect disappears after school closures. Implications for teacher orchestration of technology for normal classroom use and periods of school closure are discussed. © 2022 ACM.
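The abstract says the category is determined by quiz score and the engagement estimate, but not the exact decision rule. One plausible gating, sketched below with the quiz score as a mastery signal and engagement as the affect signal, uses cutoffs (0.8 and 0.5) that are purely illustrative assumptions.

```python
# Hypothetical decision rule mapping (quiz score, engagement estimate),
# both in [0, 1], to the five recommendation categories from the abstract.
# The cutoffs 0.8 and 0.5 are assumptions, not the paper's values.
def recommend(quiz_score, engagement):
    if quiz_score >= 0.8:
        return 5  # mastered the topic: watch next video in curriculum sequence
    if quiz_score >= 0.5:
        # partial mastery: review a segment of the current video, keeping the
        # current tutor for high-affect students, switching for low-affect ones
        return 3 if engagement >= 0.5 else 4
    # low mastery: re-watch the topic with a new tutor if engaged, otherwise
    # fall back to a new video (chosen in the paper by the RL algorithm)
    return 2 if engagement >= 0.5 else 1

print(recommend(0.6, 0.7))  # → 3
```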

7.
Comput Electr Eng ; 93: 107277, 2021 Jul.
Article in English | MEDLINE | ID: covidwho-1275234

ABSTRACT

The drastic impact of the COVID-19 pandemic is visible in all aspects of our lives, including education. With a distinctive rise in e-learning due to COVID-19, teaching is being undertaken remotely on digital platforms. To reduce the effect of this pandemic on the education sector, most educational institutions are already conducting online classes. However, to make these digital learning sessions interactive and comparable to traditional offline classrooms, it is essential to ensure that students are properly engaged during online classes. In this paper, we present novel deep-learning-based algorithms that monitor a student's emotions, such as anger, disgust, fear, happiness, sadness, and surprise, in real time. The proposed state-of-the-art algorithms compute the Mean Engagement Score (MES) by combining the results of facial landmark detection and emotion recognition with weights from a survey conducted on students over an hour-long class. The proposed automated approach will help educational institutions achieve an improved and innovative digital learning method.
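The abstract describes the MES as combining emotion recognition results with survey-derived weights but gives no formula. One minimal reading, sketched below, averages a survey weight for the emotion detected in each frame of the session; the weight values are assumptions, not the paper's.

```python
# Sketch of a Mean Engagement Score (MES): the emotion label detected in
# each frame is mapped to a survey-derived weight and the weights are
# averaged over the session. Weight values are illustrative assumptions.
SURVEY_WEIGHTS = {"happiness": 0.9, "surprise": 0.7, "sadness": 0.3,
                  "anger": 0.2, "fear": 0.25, "disgust": 0.15}

def mean_engagement_score(frame_labels):
    """Average the survey weight of the emotion detected in each frame."""
    if not frame_labels:
        return 0.0
    return sum(SURVEY_WEIGHTS[label] for label in frame_labels) / len(frame_labels)

session = ["happiness", "happiness", "surprise", "sadness"]
print(round(mean_engagement_score(session), 3))  # → 0.7
```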
